
Overview

The Personal Assistant in Axoma is an AI-powered chat interface available to all user roles (Users, Admins, and Superadmins). It offers intelligent conversation capabilities through LLM Chat, DocuChat, or Agentic workflows, providing dynamic access to knowledge across uploaded documents and connected systems.

Launching the Chatbot

Users can simply click “Run” from the application dashboard to launch the chatbot interface and begin a new interactive session. This launches the app in real time and opens a conversation window tailored for contextual engagement.

Axoma App Settings & Management

Before the Personal Assistant can be used, the app must be configured properly by Admins and Superadmins:
  • 1. Create and Draft an App: Admins or Superadmins select Create a New App from the Dashboard. The app initially appears in the Draft section.
  • 2. Configure App Settings: Navigate to App Settings, where various foundational elements are managed:
    • Tags: Define categorization labels for documents and data.
    • Groups: Create user groups.
    • Access Rights: Create rights and associate them with specific files.
    • Assign Groups to Access Rights: This determines who can access what files.
📌 Refer to the uploaded architecture image: Users → Groups → Access Rights → Files (a minimal sketch of this mapping appears below).
  • 3. Model Selection & API Key Verification: Verify the API key created by a Superadmin in Global Settings > LLM Management. Once verified, the key will list:
    • Number of linked models
    • Fallback model (if configured)
  • 4. Model Configuration: In App Settings > Language and Embedding Model:
    • One Embedding Model can be selected.
    • Multiple Language Models can be selected.
  • 5. Knowledge Base: Admins/Superadmins can upload documents up to 15 MB. Each file can have:
    • Tags
    • Access Rights
    • Parser Preferences:
      • Quick: Fast for text-only files (customizable chunk size and overlap; see the chunking sketch below).
      • Smart: Balanced for documents with minor visuals.
      • Ultra: Precision-focused for image-heavy or complex documents.
  • 6. Other Settings: Located at App Settings > Other Settings, key components include:
    • System Prompt: Define reusable system-level prompts to guide the AI’s behavior (max 250 characters).
    • User Experience: Toggle key options such as:
      • File attachments
      • Prompt library
      • Chat history
      • Multi-agent support
    • Workflow & Agent Selection: Two modes are supported:
      a. Workflow-Based Assistant: Choose between:
      • LLM Chat
      • DocuChat
      b. Agentic Assistant: Select one or more AI agents created via Agent Management for real-time automation and execution.
Once all the app settings are completed, the user is ready to Run the App.
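
To make the Users → Groups → Access Rights → Files mapping from step 2 concrete, here is a minimal sketch of that relationship as plain Python data. The names (user_groups, right_groups, accessible_files, the example users and files) are illustrative assumptions rather than Axoma's internal schema; the point is only how group membership and access-right assignments combine to determine which files a user can reach.

    # Minimal sketch of the Users -> Groups -> Access Rights -> Files mapping.
    # All names and values are hypothetical; this is not Axoma's real schema.
    user_groups = {
        "alice": {"hr-team"},
        "bob": {"it-team"},
    }
    # Each access right is granted to one or more groups...
    right_groups = {
        "hr-policies": {"hr-team"},
        "it-runbooks": {"it-team"},
    }
    # ...and associated with specific files in the Knowledge Base.
    right_files = {
        "hr-policies": {"remote_work_policy.pdf"},
        "it-runbooks": {"vpn_setup_guide.pdf"},
    }

    def accessible_files(user: str) -> set[str]:
        """Union of files behind every access right granted to the user's groups."""
        groups = user_groups.get(user, set())
        files: set[str] = set()
        for right, allowed_groups in right_groups.items():
            if groups & allowed_groups:
                files |= right_files.get(right, set())
        return files

    print(accessible_files("alice"))  # {'remote_work_policy.pdf'}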

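The Quick parser's chunk size and overlap settings (step 5) follow the standard sliding-window idea: text is cut into fixed-size pieces that share a few characters with their neighbours so content spanning a boundary is not lost. The sketch below is a generic illustration of that idea under assumed parameter names, not Axoma's actual parser.

    def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
        """Split text into overlapping chunks (generic sliding-window illustration)."""
        if overlap >= chunk_size:
            raise ValueError("overlap must be smaller than chunk_size")
        step = chunk_size - overlap
        return [text[start:start + chunk_size] for start in range(0, len(text), step)]

    # 120 characters with chunk_size=50 and overlap=10 -> chunks start at 0, 40, 80.
    sample = "x" * 120
    print([len(c) for c in chunk_text(sample, chunk_size=50, overlap=10)])  # [50, 50, 40]
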
Personal Assistant Interface (End-User Experience)

Once the app is launched from the draft, all users (User, Admin, Superadmin) gain access to the Personal Assistant from the dashboard.

Chat Preferences: Users can toggle between:
  • DocuChat
  • LLM Chat
  • Agentic Workflow Chat (if configured)

DocuChat

Users can attach up to 8 documents using the attach 🖇️ icon.
  • After attaching, click the ’+’ icon to add the specific document to the chat, so users can chat directly with the document contents.
  • If a document was uploaded through the Knowledge Base, its configured parsing settings apply.
  • These documents are shared based on Access Rights.

Answer Summary & Source Info

When using DocuChat, answers are:
  • Extracted intelligently based on file contents and parser preference.
  • Displayed along with:
    • Paragraph reference
    • File path
    • Document title
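
Conceptually, every DocuChat answer travels together with its source metadata so the interface can show where the text came from. The shape below is purely illustrative; the field names are assumptions and do not reflect Axoma's real response schema.

    from dataclasses import dataclass

    @dataclass
    class DocuChatSource:
        # Illustrative fields mirroring what the UI displays; not Axoma's real schema.
        document_title: str
        file_path: str
        paragraph_reference: str

    @dataclass
    class DocuChatAnswer:
        text: str
        sources: list[DocuChatSource]

    answer = DocuChatAnswer(
        text="Employees may work remotely up to three days per week.",
        sources=[DocuChatSource(
            document_title="Remote Work Policy",
            file_path="/knowledge-base/hr/remote_work_policy.pdf",
            paragraph_reference="Section 2, paragraph 4",
        )],
    )
    for src in answer.sources:
        print(f"{src.document_title} ({src.file_path}) - {src.paragraph_reference}")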

LLM Chat

LLM Chat enables users to interact directly with a Large Language Model (LLM) that was configured during the App Setup phase. This chat is designed for general-purpose AI conversations, similar to ChatGPT or other public LLM interfaces.
  • No document or external context is required.
  • Ideal for open-ended queries, brainstorming, summarization, casual Q&A, etc.
  • Powered by models like OpenAI GPT, Anthropic Claude, Google Gemini, etc., depending on the app’s LLM gateway configuration.
  • User input is sent directly to the selected model with no additional processing or tools involved.
Use Case Examples:
  • “Explain quantum computing in simple terms.”
  • “Write a professional email requesting a meeting.”
  • “Summarize the benefits of remote work.”
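
In practice, LLM Chat is a direct pass-through: the user's message goes straight to the model chosen in App Settings, with the app's System Prompt (from Other Settings) prepended, and the fallback model configured in LLM Management is used only if the primary call fails. The sketch below illustrates that flow using the OpenAI Python SDK as a stand-in backend; the model names and prompt are placeholders, and Axoma's gateway may route to other providers such as Anthropic or Google.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = "You are the app's Personal Assistant. Answer clearly and concisely."  # under 250 chars
    PRIMARY_MODEL = "gpt-4o"        # placeholder for the language model selected in App Settings
    FALLBACK_MODEL = "gpt-4o-mini"  # placeholder for the fallback model, if one is configured

    def llm_chat(user_message: str) -> str:
        """Send the user's message directly to the configured model, falling back on failure."""
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ]
        for model in (PRIMARY_MODEL, FALLBACK_MODEL):
            try:
                response = client.chat.completions.create(model=model, messages=messages)
                return response.choices[0].message.content
            except Exception:
                continue  # primary model failed; try the fallback
        raise RuntimeError("No configured model could answer the request")

    print(llm_chat("Explain quantum computing in simple terms."))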

Agentic Chat

Agentic Chat is enabled when the Agentic Workflow is selected during App Setup. Here, users interact with a custom-built AI Agent created and configured in the Agent Management module. Agents are enhanced versions of LLMs that can have access to Tools.
Dashboard > App > App Settings > Other Settings > Workflow & Agent Management
Key Characteristics:
  • Requires users to select a specific Agent from a list of available agents.
  • Agents are designed for task-specific or role-specific interactions (e.g., HR assistant, IT helpdesk, Research bot).
  • Can perform dynamic actions, like querying documents, invoking tools, or responding with multi-step reasoning.
  • Custom settings, tools, and context are attached to each Agent, making them more intelligent and interactive.
Use Case Examples:
  • “Search company policy documents for remote work guidelines.”
  • “Create a Jira ticket and assign it to John from the IT team.”
  • “Summarize this uploaded PDF and extract key action points.”
This enables automated interactions, such as API calls or system operations.
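
Behind the scenes, an agentic interaction is a loop in which the model decides whether a tool is needed, the tool runs, and the observation is folded into the reply. The sketch below is a deliberately simplified, framework-free illustration of that loop; the two tools and the keyword-based selection are hypothetical stand-ins for the model-driven tool choice a configured Axoma Agent actually performs.

    # Simplified agent loop: tools are plain functions the "agent" can invoke.
    # Keyword matching below is a placeholder for real LLM-driven tool selection.

    def search_policy_documents(query: str) -> str:
        # Hypothetical tool: a real agent would query the Knowledge Base here.
        return f"Top policy match for '{query}': Remote Work Policy, Section 2."

    def create_jira_ticket(summary: str, assignee: str) -> str:
        # Hypothetical tool: a real agent would call the Jira API here.
        return f"Created ticket '{summary}' and assigned it to {assignee}."

    TOOLS = {
        "search_policy_documents": search_policy_documents,
        "create_jira_ticket": create_jira_ticket,
    }

    def run_agent(user_request: str) -> str:
        """Pick a tool for the request, run it, and wrap the observation as the agent's reply."""
        lowered = user_request.lower()
        if "policy" in lowered:
            observation = TOOLS["search_policy_documents"](user_request)
        elif "ticket" in lowered:
            observation = TOOLS["create_jira_ticket"](user_request, assignee="John")
        else:
            observation = "No tool needed; the model answers directly."
        return f"Agent result: {observation}"

    print(run_agent("Search company policy documents for remote work guidelines."))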